
    A comparison of underwater visual distance estimates made by scuba divers and a stereo-video system: Implications for underwater visual census of reef fish abundance

    Underwater visual census of reef fish by scuba divers is a widely used and useful technique for assessing the composition and abundance of reef fish assemblages, but suffers from several biases and errors. We compare the accuracy of underwater visual estimates of distance made by novice and experienced scientific divers and an underwater stereo-video system. We demonstrate the potential implications that distance errors may have on underwater visual census assessments of reef fish abundance. We also investigate how the accuracy and precision of scuba diver length estimates of fish is affected as distance increases. Distance was underestimated by both experienced (mean relative error = -11.7%, s.d. = 21.4%) and novice scientific divers (mean relative error = -5.0%, s.d. = 17.9%). For experienced scientific divers this error may potentially result in an 82% underestimate or 194% overestimate of the actual area censused, which will affect estimates of fish density. The stereo-video system also underestimated distance but to a much lesser degree (mean relative error = -0.9%, s.d. = 2.6%) and with less variability than the divers. There was no correlation between the relative error of length estimates and the distance of the fish away from the observer.
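
As a rough illustration of how such distance errors propagate, a relative error in a diver's perceived distance maps directly onto the width of the strip actually censused, and hence inversely onto the density estimate. The sketch below assumes a simple belt-transect model; the function names and the 11.7% example are taken from the abstract, but the model itself is an illustrative assumption, not the paper's analysis.

```python
def relative_error(estimated, true):
    """Signed relative error of a distance estimate, as a fraction."""
    return (estimated - true) / true

def density_bias(distance_error):
    """Relative bias in a density estimate caused by a relative distance error.

    If perceived distance = true distance * (1 + e), then a fish judged to sit
    on the nominal transect boundary is really at nominal / (1 + e), so the
    area actually censused differs from the assumed area by the factor
    1 / (1 + e), and a density computed with the nominal area is biased by
    the same factor.
    """
    return 1.0 / (1.0 + distance_error) - 1.0

# A diver who underestimates distance by 11.7% (e = -0.117) censuses a
# wider strip than intended and overestimates density by roughly 13%.
bias = density_bias(-0.117)
```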

    High Resolution Surface Reconstruction of Cultural Heritage Objects Using Shape from Polarization Method

    Nowadays, three-dimensional reconstruction is used in various fields like computer vision, computer graphics, mixed reality and digital twins. The three-dimensional reconstruction of cultural heritage objects is one of the most important applications in this area and is usually accomplished by close range photogrammetry. The problem here is that the images are often noisy, and the dense image matching method has significant limitations for reconstructing the geometric details of cultural heritage objects in practice. Therefore, displaying high-level details in three-dimensional models, especially for cultural heritage objects, is a severe challenge in this field. In this paper, the shape from polarization method has been investigated, a passive method without the drawbacks of active methods. In this method, the resolution of the depth maps can be dramatically increased using the information obtained from polarized light by rotating a linear polarizing filter in front of a digital camera. Through these polarized images, the surface details of the object can be reconstructed locally with high accuracy. The fusion of polarization and photogrammetric methods is an appropriate solution for achieving high resolution three-dimensional reconstruction. The surface reconstruction assessments have been performed visually and quantitatively. The evaluations showed that the proposed method could significantly reconstruct the surfaces' details in the three-dimensional model compared to the photogrammetric method, with 10 times higher depth resolution.
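
A common way to recover polarization information with a rotating linear filter is to capture intensities at a few polarizer angles and form the linear Stokes parameters; the degree and angle of linear polarization then constrain the surface normal. A minimal per-pixel sketch follows. The three-angle (0°/45°/90°) sampling is a textbook scheme assumed here for illustration; the paper does not specify its sampling.

```python
import math

def stokes_from_polarizer(i0, i45, i90):
    """Linear Stokes parameters from intensities at polarizer angles 0/45/90 deg."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # 0 vs 90 degree preference
    s2 = 2.0 * i45 - i0 - i90  # 45 vs 135 degree preference
    return s0, s1, s2

def dolp_aolp(i0, i45, i90):
    """Degree and angle of linear polarization for one pixel."""
    s0, s1, s2 = stokes_from_polarizer(i0, i45, i90)
    dolp = math.hypot(s1, s2) / s0
    aolp = 0.5 * math.atan2(s2, s1)
    return dolp, aolp
```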

    Evaluation of penalty functions for semi-global matching cost aggregation

    The stereo matching method semi-global matching (SGM) relies on consistency constraints during the cost aggregation which are enforced by so-called penalty terms. This paper proposes and evaluates four penalty functions for SGM. Due to mutual dependencies, two types of matching cost calculation, census and rank transform, are considered. Performance is measured using original and degraded images exhibiting radiometric changes and noise from the Middlebury benchmark. The two best performing penalty functions are inversely proportional and negatively linear to the intensity gradient and perform equally well, with 6.05% and 5.91% average error, respectively. The experiments also show that adaptive penalty terms are mandatory when dealing with difficult imaging conditions. Consequently, for highest algorithmic performance in real-world systems, selection of a suitable penalty function and thorough parametrization with respect to the expected image quality is essential. Stifterverband für die Deutsche Wissenschaft.
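
For reference, the SGM recursion along one aggregation path, with the larger penalty P2 made inversely proportional to the local intensity gradient (the best-performing variant reported), can be sketched as below. The 1-D scanline setting and the parameter values are illustrative; a full SGM implementation aggregates over eight or sixteen paths.

```python
def aggregate_scanline(cost, p1, p2_base, intensity):
    """Left-to-right SGM cost aggregation along one scanline.

    cost[x][d] is the matching cost at pixel x and disparity d. Disparity
    changes of 1 are penalised by p1; larger jumps by an adaptive P2 that is
    inversely proportional to the local intensity gradient (clamped so that
    P2 never drops below P1).
    """
    n, num_disp = len(cost), len(cost[0])
    L = [row[:] for row in cost]
    for x in range(1, n):
        grad = abs(intensity[x] - intensity[x - 1])
        p2 = max(p1, p2_base / max(grad, 1))  # adaptive penalty
        prev_min = min(L[x - 1])
        for d in range(num_disp):
            best = min(
                L[x - 1][d],
                (L[x - 1][d - 1] + p1) if d > 0 else float("inf"),
                (L[x - 1][d + 1] + p1) if d < num_disp - 1 else float("inf"),
                prev_min + p2,
            )
            # subtract prev_min so costs stay bounded along the path
            L[x][d] = cost[x][d] + best - prev_min
    return L
```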

    Matching persistent scatterers to buildings

    Persistent Scatterer Interferometry (PSI) is by now a mature technique for the estimation of surface deformation in urban areas. In contrast to classical interferometry, a stack of interferograms is used to minimize the influence of atmospheric disturbances and to select a set of temporarily stable radar targets, the so-called Persistent Scatterers (PS). As a result, the deformation time series and the height for all identified PS are obtained with high accuracy. The achievable PS density thereby depends on the characteristics of the scene at hand and on the spatial resolution of the SAR data used. This means in particular that the location of PS cannot be chosen by the operator, and consequently deformation processes of interest may be spatially undersampled and not retrievable from the data. In the case of the newly available high resolution SAR data, offering a ground resolution around one metre, the sampling is potentially dense enough to enable monitoring of single buildings. However, the number of PS to be found on a single building depends highly on its orientation relative to the viewing direction of the sensor, its facade and roof structure, and also the surrounding buildings. It is thus of major importance to assess the PS density for the buildings in a scene for real-world monitoring scenarios. Besides that, it is interesting from a scientific point of view to investigate the factors influencing the PS density. In this work, we fuse building outlines (i.e. 2D GIS data) with a geocoded PS point cloud, which consists mainly of estimating and removing a shift between both datasets. After alignment of both datasets, the PS are assigned to buildings, which in turn is used to determine the PS density per building. The resulting map is a helpful tool to investigate the factors influencing PS density at buildings.
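
The assignment step described above, removing a planimetric shift and testing each geocoded PS against the building outlines, can be sketched with a plain ray-casting point-in-polygon test. The shift is assumed to have been estimated already, and all names are illustrative rather than taken from the paper's implementation.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is pt inside the polygon given as (x, y) vertices?"""
    x, y = pt
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def ps_density(ps_points, buildings, shift=(0.0, 0.0)):
    """Count PS per building after removing an estimated geocoding shift.

    buildings maps a building id to its 2D outline; ps_points are geocoded
    PS positions in the same planimetric coordinate system.
    """
    dx, dy = shift
    counts = {name: 0 for name in buildings}
    for px, py in ps_points:
        p = (px - dx, py - dy)
        for name, outline in buildings.items():
            if point_in_polygon(p, outline):
                counts[name] += 1
                break
    return counts
```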

    High speed videometric monitoring of rock breakage

    Estimation of rock breakage characteristics plays an important role in optimising various industrial and mining processes used for rock comminution. Although little research has been undertaken into 3D photogrammetric measurement of progeny kinematics, there is promising potential to improve the efficacy of rock breakage characterisation. In this study, the observation of progeny kinematics was conducted using a high speed, stereo videometric system based on laboratory experiments with a drop weight impact testing system. By manually tracking individual progeny through the captured video sequences, observed progeny coordinates can be used to determine 3D trajectories and velocities, supporting the idea that high speed video can be used for rock breakage characterisation purposes. An analysis of the results showed that the high speed videometric system successfully observed progeny trajectories and showed clear projection of the progeny away from the impact location. Velocities of the progeny could also be determined based on the trajectories and the video frame rate. These results were obtained despite the limitations of the photogrammetric system and experiment processes observed in this study. Accordingly, there is sufficient evidence to conclude that high speed videometric systems are capable of observing progeny kinematics from drop weight impact tests. With further optimisation of the systems and processes used, there is potential for improving the efficacy of rock breakage characterisation from measurements with high speed videometric systems.
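
Once progeny are tracked through the synchronized frames, each trajectory reduces to a list of per-frame 3D coordinates, and velocities follow from finite differences scaled by the frame rate. A minimal sketch, with coordinates assumed to be in metres and names chosen for illustration:

```python
def velocities(track, fps):
    """Finite-difference 3D velocities from per-frame coordinates.

    track is a list of (x, y, z) positions, one per consecutive frame;
    fps is the camera frame rate, so the time step is 1 / fps seconds.
    """
    dt = 1.0 / fps
    return [
        ((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt)
        for (x0, y0, z0), (x1, y1, z1) in zip(track, track[1:])
    ]

def speed(v):
    """Magnitude of a 3D velocity vector."""
    return (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
```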

    Feature evaluation for building facade images - an empirical study

    The classification of building facade images is a challenging problem that receives a great deal of attention in the photogrammetry community. Image classification is critically dependent on the features used. In this paper, we perform an empirical feature evaluation task for building facade images. The feature sets we choose are basic features, color features, histogram features, Peucker features, texture features, and SIFT features. We present an approach for region-wise labeling using an efficient randomized decision forest classifier and local features. We conduct our experiments with building facade image classification on the eTRIMS dataset, where our focus is on the object classes building, car, door, pavement, road, sky, vegetation, and window.

    Image selection in photogrammetric multi-view stereo methods for metric and complete 3D reconstruction

    Multi-View Stereo (MVS) as a low cost technique for precise 3D reconstruction can be a rival for laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a huge number of stereo images captured of the object (e.g. 200 high resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, the capture and processing time increases when a vast amount of high resolution images is employed. Moreover, some parts of the object are often missing due to the lack of coverage of all areas. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo or optionally single images from a large image dataset. The approach focusses on the two key steps of image clustering and iterative image selection. The method is developed within a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, which is a free package for selecting vantage images. The final 3D models obtained from the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon Laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface coordinate uncertainty and completeness. © 2013 SPIE
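
At its simplest, an iterative image-selection step of this kind can be approximated by a greedy set-cover over the surface points each candidate view observes: repeatedly pick the image that adds the most not-yet-covered points. This is a hypothetical simplification for illustration; IND's actual criterion also weighs surface coordinate uncertainty.

```python
def select_views(visibility, k):
    """Greedily select up to k images maximising newly covered surface points.

    visibility maps an image id to the set of surface-point ids it sees.
    Returns the chosen image ids (in selection order) and the covered points.
    """
    covered, chosen = set(), []
    remaining = dict(visibility)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda i: len(remaining[i] - covered))
        if not remaining[best] - covered:
            break  # no remaining image adds coverage
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```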

    Towards automating underwater measurement of fish length: a comparison of semi-automatic and manual stereo–video measurements

    Underwater stereo–video systems are widely used for counting and measuring fish in aquaculture, fisheries, and conservation management. Length measurements are generated from stereo–video recordings by a software operator using a mouse to locate the head and tail of a fish in synchronized pairs of images. These data can be used to compare spatial and temporal changes in the mean length and biomass or frequency distributions of populations of fishes. Since the early 1990s stereo–video has also been used for measuring the lengths of fish in aquaculture for quota and farm management. However, the costs of the equipment and software, the time and salary costs involved in post-processing imagery manually, and the subsequent delays in the availability of length information inhibit the adoption of this technology. We present a semi-automatic method for capturing stereo–video measurements to estimate the lengths of fish. We compare the time taken to make measurements of the same fish measured manually from stereo–video imagery to that measured semi-automatically. Using imagery recorded during transfers of Southern Bluefin Tuna (SBT) from tow cages to grow out cages, we demonstrate that the semi-automatic algorithm developed can obtain fork length measurements with an error of less than 1% of the true length and with at least a sixfold reduction in operator time in comparison to manual measurements. Of the 22 138 SBT recorded we were able to measure 52.6% (11 647) manually and 11.8% (2614) semi-automatically. For seven of the eight cage transfers recorded, there were no statistical differences in the mean length, weight, or length frequency between manual and semi-automatic measurements. When the data were pooled across the eight cage transfers, there was no statistical difference in mean length or weight between the stereo–video-based manual and semi-automated measurements. Hence, the presented semi-automatic system can be deployed to significantly reduce the costs involved in the adoption of stereo–video technology.
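
The underlying length computation, whether the head and tail points are located manually or by an algorithm, is a stereo triangulation of the two points followed by a 3D distance. The sketch below assumes the simplest rectified, parallel-camera geometry (depth Z = f·B/disparity); real stereo–video systems use a full calibration, so this is an illustration, not the paper's implementation.

```python
def triangulate(xl, yl, xr, f, baseline, cx=0.0, cy=0.0):
    """Triangulate one point from a rectified stereo pair.

    xl, yl: pixel coordinates in the left image; xr: x coordinate of the
    same point in the right image; f: focal length in pixels; baseline:
    camera separation in metres; cx, cy: principal point.
    """
    d = xl - xr                    # disparity in pixels
    Z = f * baseline / d           # depth
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return X, Y, Z

def fork_length(head_l, head_r_x, tail_l, tail_r_x, f, baseline):
    """3D distance between triangulated head and tail points (metres)."""
    H = triangulate(head_l[0], head_l[1], head_r_x, f, baseline)
    T = triangulate(tail_l[0], tail_l[1], tail_r_x, f, baseline)
    return sum((a - b) ** 2 for a, b in zip(H, T)) ** 0.5
```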

    A field and video-annotation guide for baited remote underwater stereo-video surveys of demersal fish assemblages

    Researchers TL, BG, JW, NB and JM were supported by the Marine Biodiversity Hub through funding from the Australian Government's National Environmental Science Program. Data validation scripts and GlobalArchive.org were supported by the Australian Research Data Commons, the Gorgon-Barrow Island Net Conservation Benefits Fund, administered by the Government of Western Australia, and the BHP/UWA Biodiversity and Societal Benefits of Restricted Access Areas collaboration.
    1. Baited remote underwater stereo-video systems (stereo-BRUVs) are a popular tool to sample demersal fish assemblages and gather data on their relative abundance and body-size structure in a robust, cost-effective, and non-invasive manner. Given the rapid uptake of the method, subtle differences have emerged in the way stereo-BRUVs are deployed and how the resulting imagery is annotated. These disparities limit the interoperability of datasets obtained across studies, preventing broad-scale insights into the dynamics of ecological systems.
    2. We provide the first globally accepted guide for using stereo-BRUVs to survey demersal fish assemblages and associated benthic habitats.
    3. Information on stereo-BRUV design, camera settings, field operations, and image annotation is outlined. Additionally, we provide links to protocols for data validation, archiving, and sharing.
    4. Globally, the use of stereo-BRUVs is spreading rapidly. We provide a standardised protocol that will reduce methodological variation among researchers and encourage the use of Findable, Accessible, Interoperable, and Reproducible (FAIR) workflows to increase the ability to synthesise global datasets and answer a broad suite of ecological questions.

    CANELC: constructing an e-language corpus

    This paper reports on the construction of CANELC: the Cambridge and Nottingham e-language Corpus. CANELC is a one million word corpus of digital communication in English, taken from online discussion boards, blogs, tweets, emails and SMS messages. The paper outlines the approaches used when planning the corpus: obtaining consent, collecting the data and compiling the corpus database. This is followed by a detailed analysis of some of the patterns of language used in the corpus. The analysis includes a discussion of the key words and phrases used as well as the common themes and semantic associations connected with the data. These discussions form the basis of an investigation of how e-language operates in ways both similar to and different from spoken and written records of communication (as evidenced by the BNC, the British National Corpus). CANELC stands for Cambridge and Nottingham e-language Corpus. This corpus has been built as part of a collaborative project between The University of Nottingham and Cambridge University Press, with whom sole copyright of the annotated corpus resides. CANELC comprises one million words of digital English taken from SMS messages, blogs, tweets, discussion board content and private/business emails. Plans to extend the corpus are under discussion. The legal dimension to corpus 'ownership' of some forms of unannotated data is a complex one and is under constant review. At the present time the annotated corpus is only available to authors and researchers working for CUP and is not more generally available.
